Constrained Optimization with a Continuous Hopfield-Lagrange Model
Author
Abstract
In this paper, a generalized Hopfield model with continuous neurons using Lagrange multipliers, originally introduced in [12], is thoroughly analysed. We have termed it the Hopfield-Lagrange model. It can be used to solve constrained optimization problems. In the theoretical part, we present a simple explanation of a fundamental energy term of the continuous Hopfield model. This term has caused some confusion, as reported in [11], and led to misinterpretations which are corrected here. Next, a new Lyapunov function is derived which, under certain dynamical conditions, guarantees stability of the system. We explain why a frequently used type of quadratic constraint can degenerate the Hopfield-Lagrange model into a penalty method. Furthermore, a difficulty is described which may arise when the method is applied to problems with `hard constraints'. The theoretical results suggest a way of using the Hopfield-Lagrange model. This method is described and applied to several problems, such as Weighted Matching, Crossbar Switch Scheduling and the Travelling Salesman Problem. The relevant theoretical results are compared with the computational ones. Various formulations of the constraints are tried, one of which is a new approach where a separate multiplier is used for every single constraint.
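The idea behind such a model can be sketched as coupled gradient dynamics on a Lagrangian: continuous (sigmoid) neurons descend the Lagrangian while one multiplier per constraint ascends it. The sketch below is only an illustration under assumed choices, not the paper's exact formulation: the quadratic cost, the linear constraints Av = c, the sigmoid transfer function, and the simple Euler integration are all assumptions made here.

```python
import numpy as np

def hopfield_lagrange(W, b, A, c, steps=20000, dt=0.01):
    """Illustrative Hopfield-Lagrange-style dynamics (a sketch, not the
    paper's formulation).

    Minimizes the quadratic cost E(v) = -0.5 v^T W v - b^T v subject to
    linear constraints A v = c, with one Lagrange multiplier per constraint.
    Neuron net inputs u perform gradient descent on the Lagrangian;
    multipliers perform gradient ascent on the constraint violations.
    """
    n, m = len(b), len(c)
    u = np.zeros(n)        # neuron net inputs
    lam = np.zeros(m)      # Lagrange multipliers, one per constraint
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-u))       # sigmoid transfer v = g(u)
        grad_E = -(W @ v) - b              # gradient of the cost in v
        viol = A @ v - c                   # constraint violations
        u += dt * (-(grad_E + A.T @ lam))  # neurons: descend the Lagrangian
        lam += dt * viol                   # multipliers: ascend on violation
    return v, lam
```

For instance, maximizing v1 + 2*v2 subject to v1 + v2 = 1 (i.e. W = 0, b = (1, 2), A = (1 1), c = 1) drives v towards a point near (0, 1) that approximately satisfies the constraint, with the multiplier taking over the role a fixed penalty weight would otherwise play.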
Similar references
Bayesian Image Restoration and Segmentation by Constrained Optimization
A constrained optimization method, called the Lagrange-Hopfield (LH) method, is presented for solving Markov random field (MRF) based Bayesian image estimation problems for restoration and segmentation. The method combines the augmented Lagrangian multiplier technique with the Hopfield network to solve a constrained optimization problem into which the original Bayesian estimation problem is reform...
Full text
Improving Convergence and Solution Quality of Hopfield-Type Neural Networks
Hopfield-type networks convert a combinatorial optimization problem into a constrained real optimization problem and solve the latter using the penalty method. There is a dilemma with such networks: when tuned to produce good-quality solutions, they can fail to converge to valid solutions; when tuned to converge, they tend to give low-quality solutions. This paper proposes a new method, called the Augmented Lagra...
Full text
Relaxation Labeling Using the Augmented Lagrange-Hopfield Method
A novel relaxation labeling (RL) method is presented based on augmented Lagrangian multipliers and the graded Hopfield neural network (ALH). In this method, an RL problem is converted into a constrained optimization problem and solved using the augmented Lagrangian and Hopfield techniques. The ALH method yields results comparable to the best of the existing RL algorithms in terms of the optimi...
Full text
Image Restoration and Segmentation by Constrained Optimization
(C) 1998 IEEE. Personal use of this material is permitted; permission from the IEEE is required for all other uses. Abstract: The combinatorial optimization problem of MAP esti...
Full text
On the Statistical Mechanics of (Un)constrained Stochastic Hopfield and `Elastic' Neural Networks
Stochastic binary Hopfield models are viewed from the angle of statistical mechanics. After an analysis of the unconstrained model using mean field theory, a similar investigation is applied to a constrained model, yielding comparable general explicit formulas for the free energy. Conditions are given under which some of the free energy expressions are Lyapunov functions of the corresponding differenti...
Full text